
    Privacy Spaces

    Privacy literature contains conceptualizations of privacy in relation to role-playing and identity construction, and in relation to access control and boundary-management. In this paper, I combine both strands to introduce the concept of privacy spaces: spaces in which you can play, in your own way, the relevant role(s) you have in social life. Drawing from privacy conceptions in legal scholarship, philosophy, sociology, anthropology, human geography, and psychology, a systematic overview of traditional privacy spaces is offered, including mental bubbles, the body, personal space, personal writings, the home, private conversation space, cars, stalls, intimacy bubbles, professional black boxes, coffee house spaces, public places, and political privacy places. This overview yields important insights: privacy is an infrastructural condition relevant in all zones of social life (from personal to public); privacy boundaries can be visible or invisible, fluid or stable, impenetrable or permeable; privacy protection relies on complementary mechanisms of access restriction and discretion (a distinction that captures privacy protection more accurately than that between access and control); and, most importantly, privacy protection is primarily a process of social regulation rather than legal regulation. These insights are used to briefly discuss why digital, online, and onlife spaces pose privacy challenges. While traditional spaces of social interaction are being scrambled and rehashed into digital and onlife spaces, associated social norms do not necessarily co-evolve. Because digital spaces are often interconnected and interoperable, fewer boundaries are available to clearly delimit privacy, and digital spaces more often trigger different partial identities than traditional spaces do. Moreover, the co-habitation of service providers in digital spaces contrasts with traditional physical spaces, where space providers do not usually or systematically observe what people do. Thus, digital, or onlife, impression management virtually requires people to be aware of all their selves all of the time, severely hampering their feeling that they can safely "be themselves" in any given situation, and leading to a demise of backstage spaces where people can relax from impression management.

    'Code' and the Slow Erosion of Privacy

    The notion of software code replacing legal code as a mechanism to control human behavior ("code as law") is often illustrated with examples in intellectual property and freedom of speech. This Article examines the neglected issue of the impact of code as law on privacy. To what extent is privacy-related code being used, either to undermine or to enhance privacy? On the basis of cases in the domains of law enforcement, national security, e-government, and commerce, it is concluded that technology rarely incorporates specific privacy-related norms. At the same time, however, technology very often does have clear effects on privacy, as it affects the reasonable expectation of privacy. Technology usually makes privacy violations easier. Information technology in particular is much more a technology of control than a technology of freedom. Privacy-enhancing technologies (PETs) have yet to be implemented on any serious scale. The consequent eroding effect of technology on privacy is a slow, hardly perceptible process. If one is to stop this almost natural process, a concerted effort is called for, possibly in the form of privacy impact assessments, enhanced control mechanisms, and awareness-raising.

    Conclusions and recommendations


    Location Tracking by Police: The Regulation of ‘Tireless and Absolute Surveillance’

    Location information reveals people’s whereabouts, but can also tell much about their habits, preferences, and, ultimately, much of their private lives. Current surveillance technologies used in criminal investigation include many techniques to track someone’s movements; not all are equally intrusive. This raises the following questions: how do jurisdictions draw boundaries between lesser and more serious privacy intrusions? What factors play a role? How are geolocational privacy interests framed? In this Article, we answer these questions through a comparative analysis of location-tracking regulation in eight jurisdictions: Canada, Czechia, Germany, Italy, the Netherlands, Poland, the United Kingdom, and the United States. We analyze the legal status of location tracking through human observation, GPS tracking, cell-phone tracking, IMSI catchers (Stingrays), silent SMS, automated license-plate recognition, and directional Wi-Fi tracking in these countries. This results in highly context-dependent and case-specific assessments, in which eight factors play a role: use of a technical device, place, intensity, duration, degree of suspicion, object of tracking, covertness, and active generation of data. At a deeper level of analysis, we identify different conceptualizations of privacy underlying these assessments: not only classic privacy frames, such as communications secrecy, protection of home and body, and informational privacy, but also two new privacy frames: freedom of movement in combination with anonymity, and the mosaic theory. Thus, we discern a tentative but unmistakable shift in how lawmakers and courts assess the intrusiveness of location tracking, particularly of people’s movements in public space. Traditional privacy frames tend to downplay the seriousness of the privacy infringement enabled by location tracking, and our analysis demonstrates an increasing discomfort with this tendency, leading to the emergence of novel privacy frames (or theories) to regulate what might easily turn into what the Supreme Court of the United States has called “tireless and absolute surveillance.” We conclude that legal privacy frameworks developed in past centuries prove ill-suited for assessing the privacy-intrusiveness of contemporary location-tracking investigation methods, and that emerging, novel frameworks for understanding and protecting privacy may provide lawmakers and courts with the tools needed to address the challenge of preserving (geolocational) privacy in the twenty-first century.

    Towards a code of criminal procedure 2030 and beyond


    Model code of conduct on mitigating botnets and infected machines

    This document presents a model Code of Conduct (CoC) on botnet mitigation, which can be used by private parties as a self-regulatory instrument and/or by parties collaborating in Public-Private Partnerships. This model CoC was developed in the context of the BotLeg project, to facilitate public-private collaboration in the field of cybersecurity, in particular actions focused on preventing and combating botnets and infected machines. This document comprises key components of a Code of Conduct, or statement of Best Practices, that participants can adopt in order to clarify their commitment to mitigating botnets and infected machines. The components can be reformulated, refined, and elaborated where desirable, for instance with a view to the specifics of a sector, jurisdiction, or type of organization in which the participants are active.